15 research outputs found

    Spatial information of fuzzy clustering based mean best artificial bee colony algorithm for phantom brain image segmentation

    Fuzzy c-means (FCM) is among the algorithms most commonly used for medical image segmentation. Nevertheless, the traditional FCM clustering approach has several weaknesses, such as sensitivity to noise and a tendency to become stuck in local optima, because FCM does not take contextual information into account. To address these problems, this paper presents a spatial-information fuzzy clustering method based on the mean-best artificial bee colony algorithm, called SFCM-MeanABC. The proposed approach uses contextual information in the spatial fuzzy clustering algorithm to reduce sensitivity to noise, and it uses the ability of MeanABC to balance exploration and exploitation, exploring the positive and negative directions of the search space to find the best solutions and thus avoid getting stuck in a local optimum. Experiments are carried out on two kinds of brain images: a phantom MRI brain image with different levels of noise, and a simulated image. The SFCM-MeanABC approach shows promising results compared with SFCM-ABC and other state-of-the-art methods
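    As a rough illustration of the spatial fuzzy clustering step described above, the sketch below implements a plain spatial FCM update in numpy on a grey-scale image. The MeanABC optimisation of the cluster centres is omitted, and the function name, neighbourhood size, and weighting parameters p and q are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def spatial_fcm(image, n_clusters=3, m=2.0, p=1, q=1, n_iter=20):
    """Illustrative spatial fuzzy c-means on a 2-D grey-scale image.

    The spatial term h is the average membership over each pixel's 3x3
    neighbourhood; blending it with the standard FCM membership reduces
    sensitivity to noise (p and q weight the two terms).
    """
    x = image.astype(float).ravel()
    rng = np.random.default_rng(0)
    centers = rng.choice(x, n_clusters)  # plain random init; the paper refines
                                         # the centres with MeanABC (omitted here)
    for _ in range(n_iter):
        # standard FCM memberships u_ik
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        # spatial function h_ik: neighbourhood average of the memberships
        h = np.stack([uniform_filter(u[k].reshape(image.shape), size=3).ravel()
                      for k in range(n_clusters)])
        # combine membership and spatial information, then renormalise
        u = (u ** p) * (h ** q)
        u /= u.sum(axis=0, keepdims=True)
        # update the cluster centres
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)
    return u.argmax(axis=0).reshape(image.shape), centers
```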

    A Variable Service Broker Routing Policy for data center selection in cloud analyst

    Cloud computing depends on sharing distributed computing resources to handle different services such as servers, storage, and applications. Applications and infrastructure are provided as pay-per-use services to end users through data centers located at different geographic locations. However, these data centers can become overloaded as the number of client applications being serviced at the same time and location increases, which degrades the overall QoS of the distributed services. Since different user applications may require different configurations and requirements, measuring the performance of the various resources for user applications is challenging, and the service provider cannot decide on the right level of resources. Therefore, we propose a Variable Service Broker Routing Policy (VSBRP), a heuristic-based technique that aims to achieve minimum response time by considering the communication channel bandwidth, the latency, and the size of the job. The proposed service broker policy also reduces overloading of the data centers by redirecting user requests to the next data center that yields better response and processing times. The simulation shows promising results in terms of response and processing time compared to other known broker policies from the literature
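    The sketch below illustrates the kind of heuristic the policy describes: scoring each data center by an estimated response time built from channel bandwidth, latency, job size, and current load, and redirecting away from overloaded centers. The class, field names, and the response-time formula are illustrative assumptions, not the exact VSBRP rules or the CloudAnalyst API.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class DataCenter:
    name: str
    latency_ms: float       # network latency from the user region
    bandwidth_mbps: float   # available channel bandwidth
    queued_jobs: int        # current load
    service_rate: float     # jobs the data center can finish per second


def estimate_response_time(dc: DataCenter, job_size_mb: float) -> float:
    """Rough response-time estimate: transfer time + queueing/processing time.

    This is an illustrative heuristic, not the exact VSBRP formula."""
    transfer = job_size_mb * 8.0 / dc.bandwidth_mbps        # seconds
    processing = (dc.queued_jobs + 1) / dc.service_rate     # seconds
    return dc.latency_ms / 1000.0 + transfer + processing


def select_data_center(centers: List[DataCenter], job_size_mb: float,
                       overload_threshold: int = 100) -> DataCenter:
    """Pick the data center with the lowest estimated response time,
    skipping overloaded ones so requests are redirected to the next best."""
    candidates = [dc for dc in centers if dc.queued_jobs < overload_threshold] or centers
    return min(candidates, key=lambda dc: estimate_response_time(dc, job_size_mb))


if __name__ == "__main__":
    dcs = [DataCenter("DC-1", 40, 1000, 120, 50.0),
           DataCenter("DC-2", 90, 500, 10, 40.0),
           DataCenter("DC-3", 25, 200, 30, 30.0)]
    print(select_data_center(dcs, job_size_mb=50).name)
```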

    Three-stage data generation algorithm for multiclass network intrusion detection with highly imbalanced dataset

    The Internet plays a crucial role in our daily routines, and ensuring cybersecurity for Internet users provides a safe online environment. Automatic network intrusion detection (NID) using machine learning algorithms has recently received increased attention. Because datasets are highly imbalanced across different types of attack, NID models are prone to bias towards the classes with more training samples, and generating sufficient additional training data for the minority classes is challenging. The purpose of this study is to address this challenge by extending data generation ability through a three-stage data generation algorithm that uses the synthetic minority over-sampling technique (SMOTE), a generative adversarial network (GAN), and a variational autoencoder. A convolutional neural network is employed to extract representative features from the data, which are fed into a support vector machine with a customised kernel function. An ablation study evaluated the effectiveness of the three-stage data generation, the feature extraction, and the customised kernel, followed by a performance comparison between our study and existing studies. The findings reveal that the proposed NID model achieved an accuracy of 91.9%–96.2% on the four benchmark datasets. In addition, it outperformed existing methods, such as GAN-based deep neural networks, conditional Wasserstein GAN-based stacked autoencoders, SMOTE-based random forests, and variational autoencoder-based deep neural networks, by 1.51%–28.4%
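    The sketch below shows only the first of the three generation stages (SMOTE, via the imbalanced-learn package) together with a support vector machine using a custom kernel callable. The GAN and variational-autoencoder stages and the CNN feature extractor are omitted, and the blended RBF/linear kernel and toy data are assumptions for illustration, not the authors' customised kernel.

```python
import numpy as np
from imblearn.over_sampling import SMOTE   # stage 1 of the three-stage generator
from sklearn.svm import SVC


def blended_kernel(X, Y, gamma=0.1, alpha=0.5):
    """Illustrative 'customised' kernel: a convex blend of RBF and linear terms."""
    sq = np.sum(X ** 2, 1)[:, None] + np.sum(Y ** 2, 1)[None, :] - 2 * X @ Y.T
    return alpha * np.exp(-gamma * sq) + (1 - alpha) * (X @ Y.T)


# Placeholder imbalanced data standing in for an NID dataset's features/labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = np.r_[np.zeros(500, int), np.ones(80, int), np.full(20, 2)]

# Stage 1: SMOTE balances the minority classes; the paper adds GAN and VAE
# stages on top of this to diversify the synthetic samples (omitted here).
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

clf = SVC(kernel=blended_kernel).fit(X_bal, y_bal)   # SVM with custom kernel
print(clf.score(X_bal, y_bal))
```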

    Botnet detection used fast-flux technique, based on adaptive dynamic evolving spiking neural network algorithm

    A botnet is a group of machines controlled remotely by a specific attacker, and it represents a threat to web and data security. Fast-flux service networks (FFSNs) have been employed by bot herders to cover malicious botnet activities and to increase the lifetime of malicious servers by rapidly changing the IP addresses behind a domain name. In this research, we propose a new system, named the fast-flux botnet catcher system (FFBCS), which detects fast-flux domains in an online mode using an adaptive dynamic evolving spiking neural network algorithm. Compared with two other related approaches, the proposed system shows a high level of detection accuracy with low false positive and false negative rates. The proposed adaptation of the algorithm increased detection accuracy to approximately 98.76%
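    As a hedged illustration of the detection task, the sketch below computes DNS features commonly associated with fast-flux behaviour (many distinct IPs and hosting networks, short TTLs) and applies a simple threshold rule in place of the adaptive evolving spiking neural network; all names and thresholds are illustrative assumptions, not the paper's method.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class DnsSnapshot:
    """One DNS lookup of a domain: returned IPs, their ASNs, and the record TTL."""
    ips: List[str]
    asns: List[int]
    ttl: int


def fast_flux_features(snapshots: List[DnsSnapshot]):
    """Features often used to flag fast-flux domains: many distinct IPs,
    many hosting networks (ASNs), and very short TTLs across lookups."""
    ips = {ip for s in snapshots for ip in s.ips}
    asns = {a for s in snapshots for a in s.asns}
    min_ttl = min(s.ttl for s in snapshots)
    return len(ips), len(asns), min_ttl


def looks_fast_flux(snapshots, ip_thr=10, asn_thr=3, ttl_thr=300):
    """Simple threshold rule standing in for the eSNN classifier of the paper."""
    n_ips, n_asns, ttl = fast_flux_features(snapshots)
    return n_ips >= ip_thr and n_asns >= asn_thr and ttl <= ttl_thr
```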

    Image cyberbullying detection and recognition using transfer deep machine learning

    Cyberbullying detection on social media platforms is increasingly important, necessitating robust computational methods. Current approaches, while promising, have not fully leveraged the combined strengths of deep learning and traditional machine learning for enhanced performance. Moreover, online content complexity requires models that can capture nuanced contexts beyond text, which many current methods lack. This research proposes a novel hybrid approach using deep learning models as feature extractors and machine learning classifiers to improve cyberbullying detection. Extracting features using pre-trained deep learning models like InceptionV3, ResNet50, and VGG16, then feeding them into classifiers like Logistic Regression and Support Vector Machines, enhances understanding of the complex contexts where cyberbullying occurs. Experiments on an image dataset showed that combining deep learning and machine learning achieved higher accuracy than using either approach alone. This novel framework bridges the gap in existing literature and contributes to broader efforts to combat cyberbullying through more nuanced, context-aware detection methods. The hybrid technique demonstrates the potential of blending deep learning's representation learning strengths with machine learning's sample efficiency and interpretability
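    A minimal sketch of the hybrid pipeline, assuming a Keras pre-trained ResNet50 as the frozen feature extractor and a scikit-learn SVM as the classical classifier; the image array, labels, and input size are placeholders, and the other backbones and classifiers mentioned above would be swapped in the same way.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.svm import SVC

# Pre-trained ResNet50 as a frozen feature extractor: the top classifier is
# removed and global average pooling yields one 2048-d vector per image.
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")


def extract_features(images):
    """images: float array of shape (n, 224, 224, 3), RGB, values in [0, 255]."""
    return extractor.predict(preprocess_input(images.copy()), verbose=0)


# Placeholder data: in practice these are the labelled cyberbullying images.
images = np.random.rand(8, 224, 224, 3) * 255.0
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])

features = extract_features(images)
clf = SVC(kernel="rbf").fit(features, labels)   # classical ML classifier on deep features
print(clf.predict(features[:2]))
```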

    Digital image watermarking using discrete cosine transformation based linear modulation

    The proportion of multimedia traffic in data networks has grown substantially as a result of advances in IT. It has therefore become necessary to address the following challenges in protecting multimedia data: preventing unauthorized disclosure of sensitive data, tracking down the origin of a leak, ensuring that no alterations can be made without permission, and safeguarding intellectual property for digital assets. Watermarking is a technique developed to address these issues by transferring secured data over the network; the main goal of invisible watermarking is a hidden exchange of data that keeps the message from being discovered by a third party. The objective of this work is to develop digital image watermarking using discrete cosine transformation based linear modulation. This paper proposes an invisible watermarking method that embeds information into the transform domain of grey-scale images: a stego-text is embedded into the least significant bit (LSB) of the discrete cosine transformation (DCT) coefficients using a linear modulation algorithm. The stego-text is embedded ten times with different sizes within each image, after which the stego-image is subjected to different kinds of attack, such as salt-and-pepper noise, rotation, cropping, and JPEG compression with different criteria. The proposed method is tested on four benchmark images, and PSNR, NC, and BER are calculated to evaluate the embedding effect. The outcomes show that the proposed approach is practical and robust: the obtained results are promising and do not raise suspicion, the capacity is large, and the results are imperceptible, especially when 1 bit/block is embedded
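    The sketch below illustrates the general idea of hiding one bit per 8x8 block in the least significant bit of a rounded mid-frequency DCT coefficient, using SciPy's DCT. The coefficient position and function names are assumptions, and the paper's linear modulation of the stego-text is not reproduced here.

```python
import numpy as np
from scipy.fft import dctn, idctn


def embed_bits(gray_image, bits, coeff=(4, 3)):
    """Embed one bit per 8x8 block into the LSB of the rounded DCT
    coefficient at `coeff` (an illustrative mid-frequency position)."""
    img = gray_image.astype(float).copy()
    h, w = img.shape
    k = 0
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            if k >= len(bits):
                return img
            block = dctn(img[r:r + 8, c:c + 8], norm="ortho")
            v = int(round(block[coeff]))
            block[coeff] = (v & ~1) | bits[k]        # overwrite the coefficient's LSB
            img[r:r + 8, c:c + 8] = idctn(block, norm="ortho")
            k += 1
    return img


def extract_bits(stego_image, n_bits, coeff=(4, 3)):
    """Read the embedded bits back from the same DCT coefficient."""
    bits, k = [], 0
    h, w = stego_image.shape
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            if k >= n_bits:
                return bits
            block = dctn(stego_image[r:r + 8, c:c + 8].astype(float), norm="ortho")
            bits.append(int(round(block[coeff])) & 1)
            k += 1
    return bits


# round-trip demo on a random grey image (kept as float for exact recovery)
img = np.random.rand(64, 64) * 255.0
msg = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_bits(img, msg)
print(extract_bits(stego, len(msg)) == msg)
```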

    Software Defect Prediction Using Wrapper Feature Selection Based on Dynamic Re-Ranking Strategy

    Finding defects early in a software system is a crucial task, as it creates adequate time for fixing such defects using available resources. Strategies such as symmetric testing have proven useful; however, their inability to differentiate incorrect implementations from correct ones is a drawback. Software defect prediction (SDP) is another feasible method that can be used for detecting defects early. Additionally, high dimensionality, a data quality problem, has a detrimental effect on the predictive capability of SDP models. Feature selection (FS) has been used as a feasible solution for the high dimensionality issue in SDP. According to the current literature, the two basic forms of FS approaches are filter-based feature selection (FFS) and wrapper-based feature selection (WFS), and of the two, WFS approaches have been deemed superior. However, WFS methods have a high computational cost due to the unknown number of executions required for feature subset search, evaluation, and selection, and they often lead to overfitting of classifier models because they are easily trapped in local maxima. This trapping of the WFS subset evaluator in local maxima can be overcome by using an effective search method in the evaluation process. Hence, this study proposes an enhanced WFS (EWFS) method that dynamically and iteratively selects features, incrementally adding features while considering previously selected features in its search space. The novelty of EWFS lies in enhancing the subset evaluation process of WFS methods by deploying a dynamic re-ranking strategy that iteratively selects germane features with a low subset evaluation cycle while not compromising the prediction performance of the ensuing model. For evaluation, EWFS was deployed with Decision Tree (DT) and Naïve Bayes classifiers on software defect datasets of varying granularity. The experimental findings revealed that EWFS outperformed the existing metaheuristic and sequential search-based WFS approaches considered in this work, and it selected fewer features with less computational time
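    A minimal sketch of an incremental wrapper selection loop in scikit-learn: the remaining features are re-ranked with a cheap filter score, and each top-ranked candidate is kept only if it improves the cross-validated accuracy of the wrapped classifier. This loosely mirrors the re-ranking idea described above; it is not the authors' EWFS algorithm, and the classifier, filter score, and dataset are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier


def incremental_wrapper_select(X, y, clf=None, cv=5):
    """Forward wrapper selection guided by a filter ranking that is recomputed
    over the remaining features after each step; candidates are evaluated in
    the context of the features already selected (one evaluation per feature)."""
    clf = clf or DecisionTreeClassifier(random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    best = 0.0
    while remaining:
        # re-rank the remaining features with a cheap filter score
        scores = mutual_info_classif(X[:, remaining], y, random_state=0)
        candidate = remaining[int(np.argmax(scores))]
        trial = selected + [candidate]
        acc = cross_val_score(clf, X[:, trial], y, cv=cv).mean()
        remaining.remove(candidate)
        if acc > best:                # keep the feature only if it helps
            best, selected = acc, trial
    return selected, best


# toy dataset standing in for a software defect dataset
X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)
feats, acc = incremental_wrapper_select(X, y)
print(feats, round(acc, 3))
```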